
Forthcoming machine learning and AI seminars: April 2026 edition

AIHub

This post contains a list of the AI-related seminars that are scheduled to take place between 2 April and 31 May 2026. All events detailed here are free and open for anyone to attend virtually.

What Do Our Benchmarks Actually Measure? Speaker: Vukosi Marivate (University of Pretoria). Host: University of Michigan. The Zoom link is here.

Optimization Over Trained Neural Networks: What, Why, and How? Speaker: Thiago Serra Azevedo Silva (University of Iowa). Host: Association of European Operational Research Societies. To receive the seminar link, sign up to the mailing list.


Stepwise Variational Inference with Vine Copulas

Griesbauer, Elisabeth, Rønneberg, Leiv, Frigessi, Arnoldo, Czado, Claudia, Haff, Ingrid Hobæk

arXiv.org Machine Learning

We propose stepwise variational inference (VI) with vine copulas: a universal VI procedure that combines vine copulas with a novel stepwise estimation procedure for the variational parameters. Vine copulas consist of a nested sequence of trees built from pair copulas, where more complex latent dependence can be modeled with an increasing number of trees. We propose to estimate the vine copula approximate posterior in a stepwise fashion, tree by tree along the vine structure. Further, we show that the usual backward Kullback-Leibler divergence cannot recover the correct parameters in the vine copula model, so the evidence lower bound is instead defined via the Rényi divergence. Finally, an intuitive stopping criterion for adding further trees to the vine eliminates the need to pre-define a complexity parameter of the variational distribution, as most other approaches require. Our method thus interpolates between mean-field VI (MFVI) and full latent dependence. In many applications, in particular sparse Gaussian processes, our method is parsimonious in its parameters while outperforming MFVI.
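The tree-by-tree loop with a stopping criterion can be sketched in a Gaussian toy. Everything below is illustrative, not the paper's method: the bivariate Gaussian target, the grid search, the threshold `eps`, and the use of reverse (backward) KL are all assumptions made for simplicity. Step 0 is the mean-field fit; step 1 adds a single Gaussian pair copula (one correlation parameter, playing the role of the first tree) and keeps it only if the divergence drops by more than `eps`.

```python
import numpy as np

def kl_gauss(Sq, Sp):
    """KL( N(0, Sq) || N(0, Sp) ) between zero-mean Gaussians."""
    d = Sq.shape[0]
    Spi = np.linalg.inv(Sp)
    return 0.5 * (np.trace(Spi @ Sq) - d
                  + np.log(np.linalg.det(Sp) / np.linalg.det(Sq)))

# Toy target "posterior": a correlated bivariate Gaussian.
Sp = np.array([[1.0, 0.8],
               [0.8, 1.0]])

# Step 0 (no trees): mean-field approximation with independent marginals.
# The reverse-KL-optimal diagonal variances are 1 / diag(Sp^{-1}).
Sq0 = np.diag(1.0 / np.diag(np.linalg.inv(Sp)))
kl0 = kl_gauss(Sq0, Sp)

# Step 1 (first tree): add a single Gaussian pair copula, i.e. one
# correlation parameter rho coupling the marginals; fit it by grid search.
s1, s2 = np.sqrt(Sq0[0, 0]), np.sqrt(Sq0[1, 1])

def q_cov(rho):
    return np.array([[s1 * s1, rho * s1 * s2],
                     [rho * s1 * s2, s2 * s2]])

kl1, rho = min((kl_gauss(q_cov(r), Sp), r)
               for r in np.linspace(-0.99, 0.99, 199))

# Stopping rule: keep the extra tree only if the divergence drops enough.
eps = 1e-3
keep_tree = (kl0 - kl1) > eps
print(f"KL mean-field: {kl0:.3f}  KL one tree: {kl1:.3f}  rho: {rho:.2f}")
```

Note that the reverse-KL fit here recovers a correlation well below the target's 0.8, which echoes the abstract's point that the backward KL cannot recover the correct vine parameters; the paper's Rényi-divergence-based bound is designed to avoid exactly this.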


Hassan Took a Bike Ride. Now He's One of the Thousands Missing in Gaza

WIRED

In a place denied access to basic forensic technology, and where people disappear into Israeli detention, the fate of thousands remains unknown. One of them is an autistic teenager. In the early morning dark, Abeer Skaik turned to her husband, Ali Al-Qatta, and said that today would be the day they would find their son. Ali nodded in silence, and she handed him the stack of flyers. Each bore a photograph of 16-year-old Hassan smiling widely, his shoulders loose, wearing a plain red T-shirt. He is looking directly at the camera, unguarded. On top of the page, Abeer had written a single word, an appeal, in large letters of bold red ink. Abeer watched as Ali stepped into a car with a few close friends and drove away. They started the 30-kilometer trip south, from al-Tuffah, east of Gaza City, to the European Hospital in Khan Younis. They had heard that a group of people detained by Israel, including children, would be released there. The gate was already crowded. Families stood shoulder to shoulder, wrapped in blankets against the cold, clutching photographs and ID cards. Ali distributed the flyers among his friends. When the buses of released detainees arrived, he and the others moved slowly through the narrow gaps between clusters of people. Some of those who had just been released were being pulled into embraces. Ali waited at the edge of each reunion. "Have you seen my son?" he asked. One after another, people shook their heads.


A Clarinetist, a High School Student, and Some Climate Deniers Write a Science Paper

Mother Jones



How Pokémon Go is giving delivery robots an inch-perfect view of the world

MIT Technology Review

Niantic's AI spinout is training a new world model using 30 billion images of urban landmarks crowdsourced from players. Pokémon Go was the world's first augmented-reality megahit. Released in 2016 by the Google spinout Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. From Chicago to Oslo to Enoshima, players hit the streets in the urgent hope of catching a Jigglypuff or a Squirtle or (with a huge amount of luck) an ultra-rare Galarian Zapdos hovering just out of reach, superimposed on the everyday world. "Five hundred million people installed that app in 60 days," says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out in May last year. According to the video-game firm Scopely, which bought Pokémon Go from Niantic at the same time, the game still drew more than 100 million players in 2024, eight years after it launched.


Dirichlet Scale Mixture Priors for Bayesian Neural Networks

Arnstad, August, Rønneberg, Leiv, Storvik, Geir

arXiv.org Machine Learning

Neural networks are the cornerstone of modern machine learning, yet they can be difficult to interpret, can give overconfident predictions, and are vulnerable to adversarial attacks. Bayesian neural networks (BNNs) provide some alleviation of these limitations, but have problems of their own. The key step of specifying prior distributions in BNNs is no trivial task, yet is often skipped for convenience. In this work, we propose a new class of prior distributions for BNNs, the Dirichlet scale mixture (DSM) prior, that addresses current limitations in Bayesian neural networks through structured, sparsity-inducing shrinkage. Theoretically, we derive general dependence structures and shrinkage results for DSM priors and show how they manifest under the geometry induced by neural networks. In experiments on simulated and real-world data, we find that DSM priors encourage sparse networks through implicit feature selection, are robust under adversarial attacks, and deliver competitive predictive performance with substantially fewer effective parameters. In particular, their advantages appear most pronounced in correlated, moderately small data regimes, and the resulting networks are more amenable to weight pruning. Moreover, by adopting heavy-tailed shrinkage mechanisms, our approach aligns with recent findings that such priors can mitigate the cold posterior effect, offering a principled alternative to the commonly used Gaussian priors.
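The sparsity-inducing behavior of a Dirichlet scale mixture can be sketched with a toy sampler. The parameterisation below is hypothetical, chosen only to show the mechanism (the paper's exact construction may differ): a Dirichlet draw `phi` splits a global scale budget `tau` across the weights, and each weight is Gaussian given its share. A small concentration `alpha` pushes most of the budget onto a few weights, giving many near-zero weights and a few large ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsm_prior_sample(d, alpha=0.5, tau=1.0, rng=rng):
    """Draw one d-dimensional weight vector from a toy Dirichlet scale
    mixture prior.

    Hypothetical parameterisation: phi ~ Dirichlet(alpha, ..., alpha)
    splits the global scale budget tau across the d weights, and
    w_j | phi ~ N(0, tau * d * phi_j), so the average marginal variance
    stays at tau regardless of alpha.
    """
    phi = rng.dirichlet(np.full(d, alpha))
    return rng.normal(0.0, np.sqrt(tau * d * phi))

# Small alpha -> scale budget concentrates on few weights (sparsity,
# heavy tails); large alpha -> near-homogeneous scales (Gaussian-like).
w_sparse = dsm_prior_sample(1000, alpha=0.05)
w_dense = dsm_prior_sample(1000, alpha=10.0)
print(f"near-zero fraction, alpha=0.05: {np.mean(np.abs(w_sparse) < 0.05):.2f}")
print(f"near-zero fraction, alpha=10:   {np.mean(np.abs(w_dense) < 0.05):.2f}")
```

Varying `alpha` thus interpolates between a sparsity-inducing prior and something close to the usual i.i.d. Gaussian prior, which is the kind of structured shrinkage the abstract describes.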




PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models

Neural Information Processing Systems

Summaries generated by Large Language Models (LLMs) can sometimes surpass those annotated by experts, such as journalists, according to human evaluations. However, there is limited research on whether these generic summaries meet the individual needs of ordinary people.